AAAI AI-Alert for Mar 12, 2019


Would you be happy being interviewed by a robot?

BBC News

The world's first robot designed to carry out unbiased job interviews is being tested by Swedish recruiters. But can it really do a better job than humans? Measuring 41cm (16in) tall and weighing 3.5kg (7.7lbs), she's at eye level as she sits on top of a table directly across from the candidate she's about to interview. Her glowing yellow face tilts slightly to the side. Then she blinks and smiles lightly as she poses her first question: "Have you ever been interviewed by a robot before?"


The Genderless Digital Voice the World Needs Right Now

WIRED

Boot up the options for your digital voice assistant of choice and you're likely to find two options for the gender you prefer interacting with: male or female. The problem is, that binary choice isn't an accurate representation of the complexities of gender. Some folks don't identify as either male or female, and they may want their voice assistant to mirror that identity. But a group of linguists, technologists, and sound designers--led by Copenhagen Pride and Vice's creative agency Virtue--are on a quest to change that with a new, genderless digital voice, made from real voices, called Q. Q isn't going to show up in your smartphone tomorrow, but the idea is to pressure the tech industry into acknowledging that gender isn't necessarily binary, a matter of man or woman, masculine or feminine. The project is confronting a new digital universe fraught with problems.


Modern Policing: Algorithm Helps NYPD Spot Crime Patterns

U.S. News

The department disclosed its use of the technology only this month, with Levine and Cholas-Wood detailing their work in the INFORMS Journal on Applied Analytics in an article explaining how other departments could create similar software. Speaking about it with the news media for the first time, they told The Associated Press recently that theirs is the first police department in the country to use a pattern-recognition tool like this.


Now any business can access the same type of AI that powered AlphaGo

#artificialintelligence

A startup called CogitAI has developed a platform that lets companies use reinforcement learning, the technique that gave AlphaGo mastery of the board game Go. Gaining experience: AlphaGo, an AI program developed by DeepMind, taught itself to play Go by practicing. It's practically impossible for a programmer to manually code in the best strategies for winning. Instead, reinforcement learning let the program figure out how to defeat the world's best human players on its own. Drug delivery: Reinforcement learning is still an experimental technology, but it is gaining a foothold in industry.
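
The contrast the article draws is between hand-coding a strategy and letting a program discover one through trial, error, and reward. As a bare-bones illustration of that idea only (tabular Q-learning on a made-up 4x4 gridworld; this is not AlphaGo's or CogitAI's actual technology, and every number below is arbitrary), a reinforcement learning loop can look like this:

```python
import random

# Toy 4x4 gridworld: start at (0, 0), goal at (3, 3).
# All names and numbers here are illustrative, not from the article.
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action, clip to the grid, and return (next_state, reward, done)."""
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    if nxt == (SIZE - 1, SIZE - 1):
        return nxt, 1.0, True      # reached the goal
    return nxt, -0.01, False       # small cost per move encourages short paths

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy action from the start state should head toward the goal.
print(max(range(len(ACTIONS)), key=lambda a: Q[((0, 0), a)]))
```

Nothing in the code spells out which route is best; the values in Q simply drift toward the actions that historically paid off, which is the same "figure it out by practicing" property the article attributes to AlphaGo.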


Machine Learning Can Use Tweets To Automatically Spot Critical Security Flaws

WIRED

At the endless booths of this week's RSA security trade show in San Francisco, an overflowing industry of vendors will offer any visitor an ad nauseam array of "threat intelligence" and "vulnerability management" systems. But it turns out that there's already a decent, free feed of vulnerability information that can tell systems administrators what bugs they really need to patch, updated 24/7: Twitter. And one group of researchers has not only measured the value of Twitter's stream of bug data, but is also building a piece of free software that automatically tracks it to pull out hackable software flaws and rate their severity. Researchers at Ohio State University, the security company FireEye, and research firm Leidos last week published a paper describing a new system that reads millions of tweets for mentions of software security vulnerabilities, and then, using their machine-learning-trained algorithm, assesses how much of a threat they represent based on how they're described. They found that Twitter can not only predict the majority of security flaws that will show up days later on the National Vulnerability Database--the official register of security vulnerabilities tracked by the National Institute of Standards and Technology--but that they could also use natural language processing to roughly predict which of those vulnerabilities will be given a "high" or "critical" severity rating with better than 80 percent accuracy.
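
The system described above is essentially a text-classification pipeline: collect tweets that mention software flaws, then score how severe each flaw sounds from the wording. As a rough sketch of that general technique (a plain bag-of-words classifier, not the Ohio State/FireEye/Leidos system; the example tweets, labels, and model choice below are invented for illustration), something like this captures the shape of it:

```python
# Minimal sketch of severity prediction from tweet text using scikit-learn.
# This is NOT the researchers' pipeline; tweets, labels, and model are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Unauthenticated remote code execution in FooServer 2.1, exploit in the wild",
    "Heads up: XSS in the BarCMS comment widget, patch available",
    "Minor info leak in BazLib debug endpoint, low impact",
    "Critical privilege escalation in QuxOS kernel driver, update now",
]
train_labels = ["critical", "medium", "low", "critical"]  # hypothetical severity labels

# Turn each tweet into word/bigram features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_tweets, train_labels)

new_tweet = "PoC released for remote code execution bug in FooServer plugins"
print(model.predict([new_tweet])[0])  # output depends entirely on the toy training data
```

With only a handful of made-up tweets the prediction is essentially arbitrary; the point is the shape of the pipeline, text features in and a severity label out, which the researchers scale up to millions of tweets and a trained severity model.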


The AI-Art Gold Rush Is Here

The Atlantic - Technology

The images are huge and square and harrowing: a form, reminiscent of a face, engulfed in fiery red-and-yellow currents; a head emerging from a cape collared with glitchy feathers, from which a shape suggestive of a hand protrudes; a heap of gold and scarlet mottles, convincing as fabric, propping up a face with grievous, angular features. These are part of "Faceless Portraits Transcending Time," an exhibition of prints recently shown at the HG Contemporary gallery in Chelsea, the epicenter of New York's contemporary-art world. All of them were created by a computer. The catalog calls the show a "collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal," a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it's the first solo gallery exhibit devoted to an AI artist.


Uber Not Criminally Liable In Death Of Woman Hit By Self-Driving Car, Prosecutor Says

NPR Technology

A video still from a mounted camera captures the moment before a self-driving Uber SUV fatally struck a woman in Tempe, Ariz., last March. A Yavapai County prosecutor found that Uber is not criminally liable for the crash. An Arizona prosecutor has determined that Uber is not criminally liable in the death of a Tempe woman who was struck by a self-driving test car last year.


Don't look now: why you should be worried about machines reading your emotions

The Guardian

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short. While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.


Hacked Driverless Cars Could Cause Collisions And Gridlock In Cities, Say Researchers

#artificialintelligence

When 10-20% of vehicles are hacked, clusters of roads become inaccessible from each other. Even a small-scale hack of automated cars could cause collisions and gridlock in Manhattan, hindering emergency services, according to the latest research. Researchers at Georgia Tech and Multiscale Systems Inc. investigated the 'cyber-physical' risks of hacked Internet-connected vehicles, and this week will present their results at the 2019 American Physical Society March Meeting in Boston. The rise of connected cars, and the predicted future of automated cars, have for some time been worrying regulators. However, until now most of the focus has been on preventing individual accidents, such as when a pedestrian was killed by a self-driving Uber in Arizona in 2018.
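
The underlying question in the study is one of network connectivity: once enough vehicles stall and block road segments, do parts of the street grid become unreachable from each other? A toy way to see the effect (not the researchers' Manhattan model; the grid size and blocking fractions below are arbitrary) is to delete a random fraction of edges from a grid graph and measure the largest piece that stays connected:

```python
# Toy illustration of how blocking a fraction of road segments fragments a street grid.
# This is not the Georgia Tech / Multiscale Systems model; all parameters are arbitrary.
import random
import networkx as nx

def largest_component_share(blocked_fraction, size=20, seed=0):
    """Build a size x size grid of intersections, 'block' a random fraction of
    road segments (edges), and return the share of intersections still mutually reachable."""
    rng = random.Random(seed)
    g = nx.grid_2d_graph(size, size)
    blocked = [e for e in g.edges if rng.random() < blocked_fraction]
    g.remove_edges_from(blocked)
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()

for frac in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"{frac:.0%} of segments blocked -> "
          f"{largest_component_share(frac):.0%} of intersections still connected")
```

The fractions printed by this toy example will not line up with the 10-20% figure quoted above, because that figure refers to hacked vehicles on real streets rather than randomly deleted segments on an idealized grid; the sketch only illustrates the fragmentation question being asked.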


Shimi Will Now Sing to You in an Adorable Robot Voice

IEEE Spectrum Robotics

Human-robot interaction is easy to do badly, and very difficult to do well. One approach that has worked well for robots from R2-D2 to Kuri is to avoid the problem of language--rather than use real words to communicate with humans, you can do pretty well (on an emotional level, at least) with a variety of bleeps and bloops. But as anyone who's watched Star Wars knows, R2-D2 really has a lot going on with the noises that it makes, and those noises were carefully designed to be both expressive and responsive. Most actual robots don't have the luxury of a professional sound team (and as much post-production editing as you need), so the question becomes how to teach a robot to make the right noises at the right times. At Georgia Tech's Center for Music Technology (GTCMT), Gil Weinberg and his students have a lot of experience with robots that make noise of various sorts, and they've used a new deep learning-based technique to teach their musical robot Shimi a basic understanding of human emotions, and how to communicate back to those humans in just the right way, using music.